
    Depth Superresolution using Motion Adaptive Regularization

    Spatial resolution of depth sensors is often significantly lower than that of conventional optical cameras. Recent work has explored the idea of improving the resolution of depth using higher-resolution intensity images as side information. In this paper, we demonstrate that further incorporating temporal information in videos can significantly improve the results. In particular, we propose a novel approach that improves depth resolution by exploiting the space-time redundancy in the depth and intensity using motion-adaptive low-rank regularization. Experiments confirm that the proposed approach substantially improves the quality of the estimated high-resolution depth. Our approach can serve as a first component in vision-based systems that rely on high-resolution depth information.
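    As a rough illustration of the low-rank building block (not the authors' full pipeline), the following numpy sketch applies singular value thresholding, the proximal operator of the nuclear norm, to a matrix whose columns are assumed to be motion-aligned depth patches from consecutive frames. The patch size, number of frames, and threshold are illustrative assumptions.

```python
import numpy as np

def svt(patch_group, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm, used here to impose low rank on a matrix whose columns are
    (assumed) motion-aligned depth patches from consecutive frames."""
    U, s, Vt = np.linalg.svd(patch_group, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt   # soft-threshold the spectrum

# toy usage: one 8x8 patch tracked over 10 frames, vectorized as columns;
# perfect motion alignment makes the clean patch group rank-1 across time
rng = np.random.default_rng(0)
clean = np.outer(rng.standard_normal(64), np.ones(10))
noisy = clean + 0.1 * rng.standard_normal((64, 10))
denoised = svt(noisy, tau=1.5)
print(np.linalg.matrix_rank(denoised))   # low rank restored despite the noise
```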

    Sparse Recovery from Combined Fusion Frame Measurements

    Sparse representations have emerged as a powerful tool in signal and information processing, culminating in the success of new acquisition and processing techniques such as Compressed Sensing (CS). Fusion frames are a rich new class of signal representation methods that use collections of subspaces, instead of vectors, to represent signals. This work combines these exciting fields to introduce a new sparsity model for fusion frames. Signals that are sparse under the new model can be compressively sampled and uniquely reconstructed in ways similar to sparse signals using standard CS. The combination provides a promising new set of mathematical tools and signal models useful in a variety of applications. Under the new model, a sparse signal has energy in very few of the subspaces of the fusion frame, although it does not need to be sparse within each of the subspaces it occupies. This sparsity model is captured using a mixed l1/l2 norm for fusion frames. A signal sparse in a fusion frame can be sampled using very few random projections and exactly reconstructed using a convex optimization that minimizes this mixed l1/l2 norm. The provided sampling conditions generalize the coherence and RIP conditions used in standard CS theory, and are shown to be sufficient to guarantee sparse recovery of any signal sparse in our model. Moreover, a probabilistic analysis based on a stochastic model of the sparse signal shows that, under very mild conditions, the probability of recovery failure decays exponentially with increasing dimension of the subspaces.
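    To make the mixed l1/l2 norm concrete: it sums, over the subspaces, the l2 norms of the coefficient blocks, so it penalizes the number of active subspaces rather than individual coefficients. A minimal proximal-gradient (ISTA-style) sketch of recovery under this norm follows; the group structure, measurement count, step size, and regularization weight are illustrative assumptions, and the paper's guarantees concern the exact-recovery formulation rather than this penalized variant.

```python
import numpy as np

def block_soft_threshold(x, groups, tau):
    """Prox of the mixed l1/l2 norm: shrink each subspace's coefficient
    block toward zero by its l2 norm (block soft-thresholding)."""
    out = np.zeros_like(x)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > tau:
            out[g] = (1.0 - tau / nrm) * x[g]
    return out

def ista_group(A, y, groups, lam, n_iter=500):
    """Proximal gradient for min_x 0.5*||Ax - y||^2 + lam * sum_g ||x_g||_2."""
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = block_soft_threshold(x - grad / L, groups, lam / L)
    return x

# toy usage: 20 subspaces of dimension 3 each; only 2 carry energy,
# and neither active block is sparse within its own subspace
rng = np.random.default_rng(1)
groups = [slice(3 * k, 3 * (k + 1)) for k in range(20)]
x_true = np.zeros(60)
x_true[0:3] = [1.0, -0.5, 0.8]
x_true[30:33] = [2.0, 1.5, -1.0]
A = rng.standard_normal((25, 60)) / np.sqrt(25)   # few random projections
x_hat = ista_group(A, A @ x_true, groups, lam=0.01)
print(np.round(x_hat[0:3], 2), np.round(x_hat[30:33], 2))
```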

    Learning Model-Based Sparsity via Projected Gradient Descent

    Several convex formulation methods have been proposed previously for statistical estimation with structured sparsity as the prior. These methods often require a carefully tuned regularization parameter, which can be a cumbersome or heuristic exercise to select. Furthermore, the estimate that these methods produce might not belong to the desired sparsity model, albeit accurately approximating the true parameter. Therefore, greedy-type algorithms can be more desirable for estimating structured-sparse parameters. So far, these greedy methods have mostly focused on linear statistical models. In this paper we study projected gradient descent with a non-convex structured-sparse parameter model as the constraint set. If the cost function has a Stable Model-Restricted Hessian, the algorithm produces an approximation of the desired minimizer. As an example, we elaborate on the application of the main results to estimation in Generalized Linear Models.
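    A minimal sketch of the algorithm's shape, assuming the simplest sparsity model (plain k-sparse vectors, whose model projection is hard thresholding) and a logistic-regression loss as the GLM instance; a structured model would substitute its own, possibly approximate, projection operator. The Stable Model-Restricted Hessian condition that underwrites the approximation guarantee is not verified here.

```python
import numpy as np

def project_k_sparse(x, k):
    """Projection onto the (non-convex) set of k-sparse vectors: keep the
    k largest-magnitude entries. A structured sparsity model would swap in
    its own model projection here."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def pgd_logistic(A, y, k, step=0.1, n_iter=300):
    """Projected gradient descent on the logistic (GLM) loss subject to a
    k-sparse parameter model."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-A @ x))     # predicted probabilities
        grad = A.T @ (p - y) / len(y)        # gradient of the logistic loss
        x = project_k_sparse(x - step * grad, k)
    return x

# toy usage: recover the support of a 5-sparse logistic model in 100 dims
rng = np.random.default_rng(2)
A = rng.standard_normal((400, 100))
x_true = np.zeros(100)
x_true[:5] = 2.0
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-A @ x_true))).astype(float)
x_hat = pgd_logistic(A, y, k=5)
print(np.nonzero(x_hat)[0])   # ideally the first five indices
```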

    Quantization and erasures in frame representations

    Thesis (Sc. D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2006. This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections. Includes bibliographical references (p. 123-126).

    Frame representations, which correspond to overcomplete generalizations of basis expansions, are often used in signal processing to provide robustness to errors. In this thesis, robustness is provided through the use of projections to compensate for errors in the representation coefficients, with specific focus on quantization and erasure errors. The projections are implemented by modifying the unaffected coefficients using an additive term, which is linear in the error. This low-complexity implementation only assumes linear reconstruction using a pre-determined synthesis frame, and makes no assumption on how the representation coefficients are generated.

    In the context of quantization, the limits of scalar quantization of frame representations are first examined, assuming the analysis uses inner products with the frame vectors. Bounds on the error and the bit-efficiency are derived, demonstrating that scalar quantization of the coefficients is suboptimal. As an alternative to scalar quantization, a generalization of Sigma-Delta noise shaping to arbitrary frame representations is developed by reformulating noise shaping as a sequence of compensations for the quantization error using projections. The total error is quantified using both the additive noise model of quantization and a deterministic upper bound based on the triangle inequality. It is thus shown that both the average and the worst-case error are reduced compared to scalar quantization of the coefficients.

    The projection principle is also used to provide robustness to erasures. Specifically, the case of a transmitter that is aware of the erasure occurrence is considered, which compensates for the erasure error by projecting it onto the subsequent frame vectors. It is further demonstrated that the transmitter can be split into a transmitter/receiver combination that performs the same compensation, but in which only the receiver is aware of the erasure occurrence. Furthermore, an algorithm to puncture dense representations in order to produce sparse approximate ones is introduced. In this algorithm, the error due to the puncturing is also projected onto the span of the remaining coefficients. The algorithm can be combined with quantization to produce quantized sparse representations approximating the original dense representation.

    by Petros T. Boufounos, Sc.D.
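    The following toy sketch illustrates the projection idea for quantization under assumed details: coefficients of a smoothly varying unit-norm frame in R^2 are scalar-quantized in sequence, and each quantization error is folded into the next coefficient via the inner product of consecutive frame vectors. This is only a first-order, nearest-neighbor compensation; the thesis develops the general formulation. The frame, quantizer step, and error-assignment order are illustrative assumptions.

```python
import numpy as np

def quantize_with_projection(F, a, delta):
    """Scalar-quantize the coefficients a in sequence with step delta,
    compensating each quantization error by projecting it onto the next
    frame vector (first-order compensation; assumes unit-norm columns)."""
    a = a.astype(float).copy()
    q = np.empty_like(a)
    for k in range(len(a)):
        q[k] = delta * np.round(a[k] / delta)
        e = a[k] - q[k]
        if k + 1 < len(a):
            # linear additive correction: e * <f_k, f_{k+1}> / ||f_{k+1}||^2
            a[k + 1] += e * (F[:, k] @ F[:, k + 1])
    return q

# toy usage: 32 unit-norm frame vectors sweeping a half-circle in R^2,
# forming a tight frame with F @ F.T = (m/2) * I
m, delta = 32, 0.25
theta = np.pi * np.arange(m) / m
F = np.vstack([np.cos(theta), np.sin(theta)])
x = np.array([0.7, -0.3])
a = F.T @ x                                   # frame analysis coefficients
q_plain = delta * np.round(a / delta)         # memoryless scalar quantization
q_shaped = quantize_with_projection(F, a, delta)
print(np.linalg.norm(x - (2 / m) * F @ q_plain),    # plain quantization error
      np.linalg.norm(x - (2 / m) * F @ q_shaped))   # projection-compensated error
```

    Because consecutive frame vectors here are highly correlated, most of each error is absorbed by the next coefficient, which is the regime where this kind of noise shaping pays off.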